RAPID: A Belief Convergence Strategy for Collaborating with Inconsistent Agents
Authors
Abstract
Maintaining an accurate set of beliefs in a partially observable scenario, particularly with respect to other agents operating in the same space, is a vital aspect of multiagent planning. We analyze how an agent's beliefs can be updated to adapt quickly to changes in the behavior of an unknown teammate. The main contribution of this paper is the empirical evaluation of an agent cooperating with a teammate whose goals change periodically. We test our approach in a collaborative multiagent domain where identification of goals is necessary for successful completion. The belief revision technique we propose outperforms the traditional approach in the majority of test cases. Additionally, our results suggest that a higher-level model can be approximated by maintaining a belief distribution over a set of lower-level behaviors, particularly when the belief update strategy identifies changes in behavior responsively.
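To make the belief-revision idea concrete, the sketch below maintains a probability distribution over a small set of hypothesized teammate behaviors and updates it after each observed teammate action. This is a minimal illustration, not the paper's implementation: the model names, the floor value, and the likelihood inputs are assumptions. The floor-and-renormalize step keeps discarded hypotheses recoverable, which is the kind of responsiveness to behavior changes the abstract describes.

```python
# A minimal sketch (not the paper's implementation): Bayesian belief tracking over a
# finite set of candidate teammate behaviors, with a probability floor so that a model
# ruled out earlier can regain probability quickly after the teammate's goal changes.
# The model names, the floor value, and the likelihood inputs are illustrative assumptions.

from typing import Dict


def update_beliefs(beliefs: Dict[str, float],
                   likelihoods: Dict[str, float],
                   floor: float = 0.02) -> Dict[str, float]:
    """One belief-revision step: Bayes' rule, then a floor-and-renormalize pass."""
    # Bayes' rule: posterior is proportional to prior times the likelihood of the
    # observed action under each candidate behavior.
    posterior = {m: beliefs[m] * likelihoods[m] for m in beliefs}
    total = sum(posterior.values())
    if total == 0.0:
        # Every model assigns zero likelihood; fall back to a uniform distribution.
        posterior = {m: 1.0 / len(beliefs) for m in beliefs}
    else:
        posterior = {m: p / total for m, p in posterior.items()}

    # Clamp each belief to the floor so no hypothesis is permanently eliminated,
    # then renormalize so the beliefs still sum to one.
    clamped = {m: max(p, floor) for m, p in posterior.items()}
    z = sum(clamped.values())
    return {m: p / z for m, p in clamped.items()}


if __name__ == "__main__":
    beliefs = {"goal_A": 0.5, "goal_B": 0.5}
    # The observed teammate action is far more probable under the goal_B behavior.
    beliefs = update_beliefs(beliefs, {"goal_A": 0.1, "goal_B": 0.9})
    print(beliefs)  # belief mass shifts toward goal_B, but goal_A stays recoverable
```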
Similar papers
Tuning Belief Revision for Coordination with Inconsistent Teammates
Coordination with an unknown human teammate is a notable challenge for cooperative agents. The behavior of human players in games with cooperating AI agents is often suboptimal and inconsistent, leading to choreographed and limited cooperative scenarios. This paper considers the difficulty of cooperating with a teammate whose goal and corresponding behavior change periodically. Previous wo...
Analysis of mental attributes for the conflict resolution in multiagent systems
This study addresses the problem of constructing an adequate strategy for conflict resolution in situations with high uncertainty and inconsistent goals. We develop a formal representation for concepts such as informing, deceiving, explaining, offending, forgiving, pretending, and reputation to model the various forms of multiagent conflict behavior. These concepts are defined in the ...
Belief change with noisy sensing in the situation calculus
The situation calculus has been widely applied in artificial intelligence to model and reason about actions and change in dynamic systems. Since actions carried out by agents constantly change the agents' beliefs, managing these changes is an important issue. Shapiro et al. [22] is one of the studies that considered this issue. However, in this framework, the problem of noisy...
Learning, information, and sorting in market entry games: theory and evidence
Previous data from experiments on market entry games, N-player games where each player faces a choice between entering a market and staying out, appear inconsistent with either mixed or pure Nash equilibria. Here we show that, in this class of game, learning theory predicts sorting, that is, in the long run, agents play a pure-strategy equilibrium with some agents permanently in the market, and...
Plan-belief Revision in Jason
When information is shared between agents of unknown reliability, it is possible that their belief bases become inconsistent. In such cases, the belief base must be revised to restore consistency, so that the agent is able to reason. In some cases the inconsistent information may be due to use of incorrect plans. We extend work by Alechina et al. to revise belief bases in which plans can be dyn...